Behavioral Cloning

This notebook is for Udacity's Behavioral Cloning Project

Load the Data

Here I load the data provided by Udacity

(as an alternative, there is also another set of data)

The data is downloaded into the current directory of the Google Cloud machine

Import the necessary libraries

Get the Data into a Dataframe ready to be used, and show some statistics
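A minimal sketch of this step: the Udacity `driving_log.csv` ships without a header row, so column names are supplied manually. The file paths and sample rows below are placeholders, not the real data.

```python
import io
import pandas as pd

# The Udacity driving_log.csv has no header row, so we name the columns
# ourselves (the rows below are made-up stand-ins for the real log).
columns = ['center', 'left', 'right', 'steering', 'throttle', 'brake', 'speed']
csv_text = io.StringIO(
    "IMG/center_1.jpg,IMG/left_1.jpg,IMG/right_1.jpg,0.0,0.9,0.0,30.1\n"
    "IMG/center_2.jpg,IMG/left_2.jpg,IMG/right_2.jpg,-0.15,0.9,0.0,30.0\n"
)
df = pd.read_csv(csv_text, names=columns)

# Basic statistics of the steering-angle distribution
print(df['steering'].describe())
```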

As you can see, the steering angles are heavily concentrated around zero (center driving), so I cap the number of samples per histogram bin
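The capping step can be sketched as follows. The bin count, cap, and the synthetic steering angles are assumptions for illustration; the notebook would use its real steering array and tuned thresholds.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic steering angles concentrated near zero, standing in for the log
steering = np.clip(rng.normal(0.0, 0.2, 2000), -1.0, 1.0)

num_bins = 25
samples_per_bin = 100  # cap per bin; the real threshold is a tuning choice

bins = np.linspace(-1.0, 1.0, num_bins + 1)
keep = []
for i in range(num_bins):
    # indices of the samples that fall into this bin
    idx = np.where((steering >= bins[i]) & (steering < bins[i + 1]))[0]
    rng.shuffle(idx)
    keep.extend(idx[:samples_per_bin])  # discard everything above the cap

balanced = steering[np.array(keep)]
```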

Even after capping, the distribution is still skewed toward center driving. I suspect this will make driving through tight turns difficult, and I will need to compensate later by augmenting the data

Getting Training and Validation Data

First we get arrays of the images and the steering data

Now we split the data into training and validation sets
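The split can be sketched with a plain NumPy shuffle; the notebook may well use `sklearn.model_selection.train_test_split` instead, and the 80/20 ratio and placeholder paths here are assumptions.

```python
import numpy as np

def split_data(X, y, valid_frac=0.2, seed=42):
    """Shuffle and split parallel arrays into training and validation sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_valid = int(len(X) * valid_frac)
    valid_idx, train_idx = idx[:n_valid], idx[n_valid:]
    return X[train_idx], X[valid_idx], y[train_idx], y[valid_idx]

# Hypothetical image paths paired with steering angles
paths = np.array([f'IMG/center_{i}.jpg' for i in range(100)])
angles = np.linspace(-1.0, 1.0, 100)
X_train, X_valid, y_train, y_valid = split_data(paths, angles)
```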

Image Preprocessing

Let's take any image, for example element 50
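A preprocessing sketch under stated assumptions: crop away the sky and the car hood, resize to the NVIDIA input shape of 66×200, and normalize. The crop rows, and the pure-NumPy nearest-neighbour resize, are stand-ins; the notebook itself would more likely use `cv2.resize` plus a YUV conversion.

```python
import numpy as np

def preprocess(img):
    """Crop, resize to (66, 200), and scale pixel values to [0, 1]."""
    img = img[60:135, :, :]                 # crop top (sky) and bottom (hood)
    rows = np.linspace(0, img.shape[0] - 1, 66).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, 200).astype(int)
    img = img[rows][:, cols]                # nearest-neighbour resize to 66x200
    return img.astype(np.float32) / 255.0   # normalize

# A dummy 160x320 simulator-sized frame in place of a real image
sample = np.random.default_rng(0).integers(0, 256, (160, 320, 3), dtype=np.uint8)
out = preprocess(sample)
```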

Image Augmentation

I am going to perform a series of operations to augment the data

Zoom
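One way to sketch the zoom augmentation: crop a random central region and resize it back to the original shape. Libraries such as `imgaug` offer this directly; the NumPy nearest-neighbour version below is an assumption-laden stand-in.

```python
import numpy as np

def random_zoom(img, rng, max_zoom=0.3):
    """Zoom in by a random factor by cropping the centre and resizing back."""
    zoom = 1.0 + rng.uniform(0.0, max_zoom)
    h, w = img.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)        # size of the central crop
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    rows = np.linspace(0, ch - 1, h).astype(int)
    cols = np.linspace(0, cw - 1, w).astype(int)
    return crop[rows][:, cols]                   # back to the original shape

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (160, 320, 3), dtype=np.uint8)
zoomed = random_zoom(img, rng)
```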

Batch Generator

Incorporate Data Augmentation

For example, with a batch size of 1, I get:
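A generator along these lines yields `(images, steering)` batches forever, which is what Keras expects. The `load_and_preprocess` placeholder and the flip-only augmentation are assumptions; the notebook's real generator would load images from disk and apply its full augmentation pipeline.

```python
import numpy as np

def batch_generator(paths, angles, batch_size, training, rng):
    """Yield (images, steering) batches indefinitely."""
    def load_and_preprocess(path):
        # Placeholder: a real version would read and preprocess the image file
        return np.zeros((66, 200, 3), dtype=np.float32)

    n = len(paths)
    while True:
        idx = rng.choice(n, size=batch_size)
        images, steer = [], []
        for i in idx:
            img, angle = load_and_preprocess(paths[i]), angles[i]
            if training and rng.random() < 0.5:
                img, angle = np.fliplr(img), -angle  # horizontal flip
            images.append(img)
            steer.append(angle)
        yield np.stack(images), np.array(steer)

gen = batch_generator(['a.jpg'] * 10, np.linspace(-1, 1, 10), 1, True,
                      np.random.default_rng(0))
X_batch, y_batch = next(gen)
```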

The Model

I am going to implement the NVIDIA model described in End-to-End Deep Learning for Self-Driving Cars

First, let's check that the input matches the shape expected by the NVIDIA model, (66, 200, 3):
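As a sanity check, the NVIDIA layer shapes can be traced by hand from the (66, 200, 3) input: three 5×5 stride-2 convolutions (24, 36, 48 filters) followed by two 3×3 stride-1 convolutions (64 filters each), then a flatten feeding dense layers of 100, 50, 10, and 1 unit. This is plain arithmetic, not the Keras model itself.

```python
def conv_out(h, w, k, s):
    """Output height/width of a 'valid' convolution with kernel k, stride s."""
    return (h - k) // s + 1, (w - k) // s + 1

# The NVIDIA convolutional stack applied to a 66x200 input
shape = (66, 200)
for k, s, filters in [(5, 2, 24), (5, 2, 36), (5, 2, 48), (3, 1, 64), (3, 1, 64)]:
    shape = conv_out(*shape, k, s)

# Flattened feature count ahead of the 100 -> 50 -> 10 -> 1 dense layers
flat = shape[0] * shape[1] * 64
```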

To train the model I use `fit_generator`
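Because `fit_generator` consumes an endless generator, it needs the number of batches per epoch rather than the sample count. A sketch of that bookkeeping (the sample counts, batch size, and epoch count are assumptions; the commented call shows the intended usage):

```python
import math

n_train, n_valid, batch_size = 4000, 1000, 100

# Number of generator batches that make up one pass over each set
steps_per_epoch = math.ceil(n_train / batch_size)
validation_steps = math.ceil(n_valid / batch_size)

# Intended usage with the training and validation generators (hypothetical
# names; fit_generator is the older Keras API, replaced by fit in tf.keras 2):
# model.fit_generator(train_gen, steps_per_epoch=steps_per_epoch,
#                     validation_data=valid_gen,
#                     validation_steps=validation_steps, epochs=10)
```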

(the cells from here on are not used)